
    Random restricted partitions

    We study two types of probability measures on the set of integer partitions of n with at most m parts. The first chooses a random partition with probability depending only on its largest part. We obtain the limiting joint distribution of all the parts, and that of the largest part, as n tends to infinity while m is fixed or tends to infinity. In particular, if m tends to infinity slowly enough, the largest part satisfies a central limit theorem. The second measure is very general: it includes the Dirichlet distribution and the uniform distribution as special cases. We derive the asymptotic joint distribution of the parts, and that of the largest part, by letting n and m grow in the same manner as for the first measure.
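
    As a concrete (if brute-force) illustration of the first type of measure, the sketch below enumerates the partitions of n with at most m parts and samples one with probability proportional to a weight that depends only on its largest part. The geometric weight in the example is purely illustrative and is not the paper's construction.

```python
import random

def partitions_at_most_m_parts(n, m, max_part=None):
    """Yield the partitions of n with at most m parts, as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    if m == 0:
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions_at_most_m_parts(n - first, m - 1, first):
            yield (first,) + rest

def sample_partition(n, m, weight, rng=random):
    """Sample a partition of n with at most m parts, with probability
    proportional to weight(largest part)."""
    parts = list(partitions_at_most_m_parts(n, m))
    weights = [weight(p[0]) for p in parts]
    return rng.choices(parts, weights=weights, k=1)[0]

# Illustrative weight: geometric in the largest part (not the paper's choice).
print(sample_partition(10, 3, weight=lambda largest: 0.5 ** largest))
```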

    Integrated fault estimation and accommodation design for discrete-time Takagi-Sugeno fuzzy systems with actuator faults

    This paper addresses the problem of integrated robust fault estimation (FE) and accommodation for discrete-time Takagi–Sugeno (T–S) fuzzy systems. First, a multiconstrained reduced-order FE observer (RFEO) is proposed to achieve FE for discrete-time T–S fuzzy models with actuator faults. Based on the RFEO, a new fault estimator is constructed. Then, using the online FE information, a new fault accommodation approach based on fuzzy dynamic output feedback is designed to compensate for the effect of faults while stabilizing the closed-loop system. Moreover, the RFEO and the dynamic output feedback fault-tolerant controller are designed separately, so that their design parameters can be calculated readily. Simulation results are presented to illustrate our contributions.
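
    As a rough illustration only, and not the paper's multiconstrained reduced-order observer or T–S fuzzy design, the sketch below estimates a slowly varying actuator fault for a single linear discrete-time model by augmenting the state with the fault and running a Luenberger-style observer. All matrices and the gain L are made-up values; a real design would obtain the gain from LMI conditions as in the paper.

```python
import numpy as np

# Simplified plant: x[k+1] = A x[k] + B u[k] + F f[k],  y[k] = C x[k].
# A, B, C, F are illustrative values, not taken from the paper.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.1], [0.2]])
F = np.array([[0.0], [0.5]])      # assumed fault distribution matrix
C = np.array([[1.0, 0.0]])

# Augmented state z = [x; f], assuming the fault varies slowly (f[k+1] ~ f[k]).
Aa = np.block([[A, F], [np.zeros((1, 2)), np.eye(1)]])
Ba = np.vstack([B, np.zeros((1, 1))])
Ca = np.hstack([C, np.zeros((1, 1))])

# Hand-picked observer gain that keeps the estimation-error eigenvalues
# inside the unit circle; a real design would solve LMIs instead.
L = np.array([[0.8], [0.3], [0.4]])

def step(z_hat, u, y):
    """One observer update; returns the new augmented estimate [x_hat; f_hat]."""
    y_hat = Ca @ z_hat
    return Aa @ z_hat + Ba @ u + L @ (y - y_hat)

# Simulate a constant actuator fault appearing at k = 20.
x = np.zeros((2, 1))
z_hat = np.zeros((3, 1))
for k in range(60):
    u = np.array([[1.0]])
    f = np.array([[0.7]]) if k >= 20 else np.array([[0.0]])
    y = C @ x
    z_hat = step(z_hat, u, y)
    x = A @ x + B @ u + F @ f
print("estimated fault after 60 steps:", float(z_hat[2, 0]))
```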

    Revisiting Kernelized Locality-Sensitive Hashing for Improved Large-Scale Image Retrieval

    We present a simple but powerful reinterpretation of kernelized locality-sensitive hashing (KLSH), a general and popular method developed in the vision community for performing approximate nearest-neighbor searches in an arbitrary reproducing kernel Hilbert space (RKHS). Our new perspective is based on viewing the steps of the KLSH algorithm in an appropriately projected space, and has several key theoretical and practical benefits. First, it eliminates the problematic conceptual difficulties that are present in the existing motivation of KLSH. Second, it yields the first formal retrieval performance bounds for KLSH. Third, our analysis reveals two techniques for boosting the empirical performance of KLSH. We evaluate these extensions on several large-scale benchmark image retrieval data sets, and show that our analysis leads to improved recall performance of at least 12%, and sometimes much higher, over the standard KLSH method.
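
    The projected-space view can be illustrated with a short sketch: map the data into explicit approximate RKHS coordinates via a Nyström-style construction over a few landmark points, then apply ordinary random-hyperplane hashing in that space. The RBF kernel, landmark count, and code length below are arbitrary illustrative choices, and this is a simplified stand-in rather than the exact KLSH construction analyzed in the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian RBF kernel matrix between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_map(X, landmarks, gamma=0.5, eps=1e-8):
    """Approximate explicit feature map phi such that phi(x) . phi(y) ~= k(x, y),
    built from a small landmark set."""
    K_ll = rbf_kernel(landmarks, landmarks, gamma)
    eigval, eigvec = np.linalg.eigh(K_ll)
    eigval = np.maximum(eigval, eps)
    W = eigvec / np.sqrt(eigval)          # acts as K_ll^{-1/2}
    return rbf_kernel(X, landmarks, gamma) @ W

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))           # toy database
landmarks = X[rng.choice(len(X), 64, replace=False)]

Phi = nystrom_map(X, landmarks)           # explicit (approximate) RKHS coordinates
R = rng.normal(size=(Phi.shape[1], 32))   # 32 random hyperplanes
codes = (Phi @ R > 0).astype(np.uint8)    # 32-bit binary hash codes

# Hash a query the same way and rank database items by Hamming distance.
q = rng.normal(size=(1, 16))
q_code = (nystrom_map(q, landmarks) @ R > 0).astype(np.uint8)
hamming = (codes != q_code).sum(axis=1)
print("indices of the 5 nearest codes:", np.argsort(hamming)[:5])
```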